Knowledge Compilation
On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference
Probabilistic logics are receiving a lot of attention today because of their expressive power for knowledge representation and learning. However, this expressivity is detrimental to the tractability of inference, when done at the propositional level. To solve this problem, various lifted inference algorithms have been proposed that reason at the first-order level, about groups of objects as a whole. Despite the existence of various lifted inference approaches, there are currently no completeness results about these algorithms. The key contribution of this paper is that we introduce a formal definition of lifted inference that allows us to reason about the completeness of lifted inference algorithms relative to a particular class of probabilistic models. We then show how to obtain a completeness result using a first-order knowledge compilation approach for theories of formulae containing up to two logical variables.
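As a toy illustration of the lifted-versus-propositional gap the abstract describes (my own example, not from the paper): for the one-rule theory ∀x. smokes(x) → drinks(x) over a domain of n people, a lifted reasoner counts models in closed form by reasoning about all n people at once, while propositional counting must enumerate 2^(2n) ground interpretations.

```python
from itertools import product

def lifted_count(n):
    # Lifted: each of the n domain elements independently has 3 of the 4
    # (smokes, drinks) states satisfying smokes -> drinks, so 3^n models.
    return 3 ** n

def propositional_count(n):
    # Propositional: ground the theory and enumerate all 2^(2n)
    # interpretations of the smokes/drinks atoms.
    models = 0
    for bits in product([False, True], repeat=2 * n):
        smokes, drinks = bits[:n], bits[n:]
        if all((not s) or d for s, d in zip(smokes, drinks)):
            models += 1
    return models

for n in range(1, 6):
    print(n, lifted_count(n), propositional_count(n))  # counts agree
```

The lifted count is computed in constant time per domain size, which is exactly the kind of domain-lifted behaviour the paper's completeness result characterises for two-variable theories.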
IASCAR: Incremental Answer Set Counting by Anytime Refinement
Fichte, Johannes K., Gaggl, Sarah Alice, Hecher, Markus, Rusovac, Dominik
Answer set programming (ASP) is a popular declarative programming paradigm with various applications. Programs can easily have many answer sets that cannot be enumerated in practice, but counting still allows quantifying solution spaces. If one counts under assumptions on literals, one obtains a tool to comprehend parts of the solution space, so-called answer set navigation. However, navigating through parts of the solution space requires counting many times, which is expensive in theory. Knowledge compilation compiles instances into representations on which counting works in polynomial time. However, these techniques exist only for CNF formulas, and compiling ASP programs into CNF formulas can introduce an exponential overhead. This paper introduces a technique to iteratively count answer sets under assumptions on knowledge compilations of CNFs that encode supported models. Our anytime technique uses the inclusion-exclusion principle to improve bounds by over- and undercounting systematically. In a preliminary empirical analysis, we demonstrate promising results. After compiling the input (offline phase), our approach quickly (re)counts.
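A minimal sketch of the counting idea, assuming a plain brute-force CNF model counter in place of the paper's compiled supported-model representation: counting under a disjunction of assumption literals decomposes by inclusion-exclusion into counts under conjunctions of literals, and truncating that sum yields alternating upper and lower bounds that converge to the exact count.

```python
from itertools import product, combinations

def count_models(clauses, n, fixed=()):
    """Brute-force #SAT of a CNF over variables 1..n.
    `fixed` is an iterable of literals forced true (+v / -v)."""
    total = 0
    for assign in product([False, True], repeat=n):
        val = lambda lit: assign[abs(lit) - 1] == (lit > 0)
        if all(val(l) for l in fixed) and \
           all(any(val(l) for l in clause) for clause in clauses):
            total += 1
    return total

def ie_bounds(clauses, n, literals):
    """Anytime bounds on the number of models satisfying at least one
    assumption literal, by truncating the inclusion-exclusion sum."""
    running, bounds = 0, []
    for k in range(1, len(literals) + 1):
        term = sum(count_models(clauses, n, subset)
                   for subset in combinations(literals, k))
        running += term if k % 2 == 1 else -term
        bounds.append(running)  # odd prefixes overcount, even undercount
    return bounds

# Example: phi = (x1 or x2) and (not x2 or x3), assumptions x1 or x3.
clauses = [(1, 2), (-2, 3)]
exact = count_models(clauses, 3) - count_models(clauses, 3, (-1, -3))
print(ie_bounds(clauses, 3, (1, 3)), exact)
```

Each truncated prefix is a valid bound, which is what makes the refinement usable in an anytime fashion before the full sum is evaluated.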
Tractable Bounding of Counterfactual Queries by Knowledge Compilation
Huber, David, Chen, Yizuo, Antonucci, Alessandro, Darwiche, Adnan, Zaffalon, Marco
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models. A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters. Such a method requires multiple (Bayesian network) queries over models sharing the same structural equations and topology, but different exogenous probabilities. This setup makes a compilation of the underlying model to an arithmetic circuit advantageous, thus inducing a sizeable inferential speed-up. We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values when computing the different queries. We also discuss parallelisation techniques to further speed up the bound computation. Experiments against standard Bayesian network inference show clear computational advantages with up to an order of magnitude of speed-up.
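A sketch of the single-compilation idea under simplifying assumptions: the circuit below is a hand-built arithmetic circuit for a two-node model U → X with symbolic parameter leaves (the names `pU1`, `pX1_U1`, `pX1_U0` are mine). The structure is built once and then re-evaluated cheaply for different exogenous probabilities, which is the speed-up the abstract refers to.

```python
# A tiny arithmetic circuit with symbolic leaves, built once and
# re-evaluated under different exogenous parameter values.
# Node forms: ('+', a, b), ('*', a, b), ('p', name), ('c', value).

def evaluate(node, params):
    op = node[0]
    if op == 'c':
        return node[1]
    if op == 'p':
        return params[node[1]]          # symbolic parameter lookup
    left, right = evaluate(node[1], params), evaluate(node[2], params)
    return left + right if op == '+' else left * right

# Circuit for P(X=1) = sum_u P(U=u) * P(X=1 | U=u) in the chain U -> X;
# P(U=0) is encoded as 1 - pU1 so only independent parameters appear.
circuit = ('+',
           ('*', ('p', 'pU1'), ('p', 'pX1_U1')),
           ('*', ('+', ('c', 1.0), ('*', ('c', -1.0), ('p', 'pU1'))),
                 ('p', 'pX1_U0')))

# The same compiled structure answers the query for many exogenous settings.
for pU1 in (0.2, 0.5, 0.9):
    px1 = evaluate(circuit, {'pU1': pU1, 'pX1_U1': 0.7, 'pX1_U0': 0.1})
    print(round(px1, 3))
```

In the EM scheme described above, only the parameter dictionary changes between iterations; the circuit traversal cost is fixed.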
Scaling Integer Arithmetic in Probabilistic Programs
Cao, William X., Garg, Poorva, Tjoa, Ryan, Holtzen, Steven, Millstein, Todd, Broeck, Guy Van den
Distributions on integers are ubiquitous in probabilistic modeling but remain challenging for many of today's probabilistic programming languages (PPLs). The core challenge comes from discrete structure: many of today's PPL inference strategies rely on enumeration, sampling, or differentiation in order to scale, which fail for high-dimensional complex discrete distributions involving integers. These approximate inference strategies can scale well in many cases, but they struggle to find valid sampling regions in the presence of low-probability observations and non-differentiability (e.g., observing the sum of two large random integers to be a constant) [Gelman et al., 2015, Bingham et al., 2019, Dillon et al., 2017]. Exact inference strategies work by preserving the global structure of the distribution, but here there is a challenge: what is the right strategy for efficiently representing complex discrete distributions involving integers?
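To make the low-probability-observation failure mode concrete (an illustrative sketch, not the paper's technique): exact inference represents the full distribution, so conditioning on a rare event such as the sum of two integers being 0 is as cheap as any other query, whereas a rejection sampler would almost never hit the evidence.

```python
from collections import defaultdict

def convolve(p, q):
    """Exact distribution of X + Y for independent integer-valued
    X and Y, each given as a dict {value: probability}."""
    out = defaultdict(float)
    for x, px in p.items():
        for y, py in q.items():
            out[x + y] += px * py
    return dict(out)

def posterior_given_sum(p, q, s):
    """Exact posterior over X given the observation X + Y == s."""
    evidence = sum(px * q.get(s - x, 0.0) for x, px in p.items())
    return {x: px * q.get(s - x, 0.0) / evidence
            for x, px in p.items() if q.get(s - x, 0.0) > 0.0}

uniform = {k: 1 / 100 for k in range(100)}       # Uniform{0..99}
sum_dist = convolve(uniform, uniform)
post = posterior_given_sum(uniform, uniform, 0)  # rare evidence: sum == 0
print(sum_dist[0], post)   # P(sum = 0) = 1e-4; the posterior forces X = 0
```

Dense convolution like this blows up for large ranges and many variables, which is precisely the representation problem the paper targets.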
Neural Probabilistic Logic Programming in Discrete-Continuous Domains
De Smet, Lennert, Martires, Pedro Zuidberg Dos, Manhaeve, Robin, Marra, Giuseppe, Kimmig, Angelika, De Raedt, Luc
Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic background knowledge in the form of logic. It has been shown to aid learning in the limited data regime and to facilitate inference on out-of-distribution data. Probabilistic NeSy focuses on integrating neural networks with both logic and probability theory, which additionally allows learning under uncertainty. A major limitation of current probabilistic NeSy systems, such as DeepProbLog, is their restriction to finite probability distributions, i.e., discrete random variables. In contrast, deep probabilistic programming (DPP) excels in modelling and optimising continuous probability distributions. Hence, we introduce DeepSeaProbLog, a neural probabilistic logic programming language that incorporates DPP techniques into NeSy. Doing so results in the support of inference and learning of both discrete and continuous probability distributions under logical constraints. Our main contributions are 1) the semantics of DeepSeaProbLog and its corresponding inference algorithm, 2) a proven asymptotically unbiased learning algorithm, and 3) a series of experiments that illustrate the versatility of our approach.
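A minimal discrete-continuous query in this spirit (illustrative only; the names and numbers are mine, and DeepSeaProbLog compiles such queries rather than sampling them naively): a Bernoulli fact conjoined with a comparison on a Gaussian variable, estimated by Monte Carlo and checked against the closed form.

```python
import math
import random

def query_prob(n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(raining and temp < 5.0), where
    raining ~ Bernoulli(0.3) and temp ~ Normal(8, 4): a mixed
    discrete-continuous query with a logical conjunction."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        raining = rng.random() < 0.3
        temp = rng.gauss(8.0, 4.0)
        hits += raining and temp < 5.0
    return hits / n_samples

# Closed form: 0.3 * Phi((5 - 8) / 4), with Phi the standard normal CDF.
exact = 0.3 * 0.5 * (1 + math.erf((5.0 - 8.0) / (4.0 * math.sqrt(2))))
print(round(query_prob(), 3), round(exact, 3))
```

The point of a discrete-continuous NeSy language is that such conjunctions of logical and numeric conditions are first-class queries, with inference and gradients handled by the system rather than by hand-written estimators like this one.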
Solving PP^PP-Complete Problems Using Knowledge Compilation
Oztok, Umut, Choi, Arthur, Darwiche, Adnan
Knowledge compilation has been successfully used to solve beyond-NP problems, including some PP-complete and NP^PP-complete problems for Bayesian networks. In this work we show how knowledge compilation can be used to solve problems in the more intractable complexity class PP^PP. This class contains NP^PP and includes interesting AI problems, such as non-myopic value of information. We show how to solve the prototypical PP^PP-complete problem MajMajSat in linear time once the problem instance is compiled into a special class of Sentential Decision Diagrams. To show the practical value of our approach, we adapt it to answer the Same-Decision Probability (SDP) query, which was recently introduced for Bayesian networks. The SDP problem is also PP^PP-complete. It is a value-of-information query that quantifies the robustness of threshold-based decisions and comes with a corresponding algorithm that was also recently proposed. We present favorable experimental results, comparing our new algorithm based on knowledge compilation with the state-of-the-art algorithm for computing the SDP.
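For reference, here is the brute-force definition of MajMajSat that the compiled-SDD approach avoids (my own sketch, exponential-time by construction; conventions for majority vary, and strict majority is assumed here): given a formula over variables X ∪ Y, decide whether a majority of X-assignments make the formula satisfied by a majority of Y-assignments.

```python
from itertools import product

def maj_maj_sat(phi, n_x, n_y):
    """Brute-force MajMajSat: does a strict majority of assignments to
    the X-variables make phi satisfied by a strict majority of
    Y-assignments?  `phi` takes two boolean tuples (x, y)."""
    x_wins = 0
    for x in product([False, True], repeat=n_x):
        sat = sum(phi(x, y) for y in product([False, True], repeat=n_y))
        if sat * 2 > 2 ** n_y:          # strict majority over Y
            x_wins += 1
    return x_wins * 2 > 2 ** n_x        # strict majority over X

# phi = (x1 or y1) and (x2 or y2)
phi = lambda x, y: (x[0] or y[0]) and (x[1] or y[1])
print(maj_maj_sat(phi, 2, 2))
```

This doubly exponential enumeration is what the paper replaces with a linear-time pass over a suitably structured Sentential Decision Diagram.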
Tractable Boolean and Arithmetic Circuits
Tractable Boolean and arithmetic circuits have been studied extensively in AI for over two decades now. These circuits were initially proposed as "compiled objects," meant to facilitate logical and probabilistic reasoning, as they permit various types of inference to be performed in linear time and in a feed-forward fashion, like neural networks. In more recent years, the role of tractable circuits has significantly expanded as they became a computational and semantic backbone for some approaches that aim to integrate knowledge, reasoning and learning. In this article, we review the foundations of tractable circuits and some associated milestones, while focusing on their core properties and techniques that make them particularly useful for the broad aims of neuro-symbolic AI.
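A concrete instance of the linear-time, feed-forward evaluation mentioned above (a standard textbook construction, hand-built here rather than compiled): the network polynomial of a two-node Bayesian network A → B, where clamping evidence indicators yields the probability of evidence in one bottom-up pass.

```python
# Network polynomial of the Bayesian network A -> B:
#   f = sum over a, b of lambda_a * lambda_b * theta_a * theta_{b|a}.
# Clamping the indicator lambdas to the evidence and leaving
# marginalized variables at 1 evaluates P(evidence) feed-forward.

theta_a = {1: 0.3, 0: 0.7}                       # P(A)
theta_b_given_a = {(1, 1): 0.9, (0, 1): 0.1,     # P(B | A), keyed (b, a)
                   (1, 0): 0.2, (0, 0): 0.8}

def network_poly(lam_a, lam_b):
    return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b_given_a[(b, a)]
               for a in (0, 1) for b in (0, 1))

# Evidence B = 1: clamp lambda_b; A is marginalized out (lambda_a = 1).
p_b1 = network_poly({0: 1, 1: 1}, {0: 0, 1: 1})
print(round(p_b1, 3))   # P(B=1) = 0.3*0.9 + 0.7*0.2 = 0.41
```

In a compiled circuit this sum is an arithmetic circuit whose size, not the exponential number of terms, determines inference cost; the feed-forward pass is the same idea at scale.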
On the Complexity of SHAP-Score-Based Explanations: Tractability via Knowledge Compilation and Non-Approximability Results
Arenas, Marcelo, Barceló, Pablo, Bertossi, Leopoldo, Monet, Mikaël
In Machine Learning, the $\mathsf{SHAP}$-score is a version of the Shapley value that is used to explain the result of a learned model on a specific entity by assigning a score to every feature. While in general computing Shapley values is an intractable problem, we prove a strong positive result stating that the $\mathsf{SHAP}$-score can be computed in polynomial time over deterministic and decomposable Boolean circuits. Such circuits are studied in the field of Knowledge Compilation and generalize a wide range of Boolean circuits and binary decision diagrams classes, including binary decision trees and Ordered Binary Decision Diagrams (OBDDs). We also establish the computational limits of the SHAP-score by observing that computing it over a class of Boolean models is always polynomially as hard as the model counting problem for that class. This implies that both determinism and decomposability are essential properties for the circuits that we consider. It also implies that computing $\mathsf{SHAP}$-scores is intractable as well over the class of propositional formulas in DNF. Based on this negative result, we look for the existence of fully-polynomial randomized approximation schemes (FPRAS) for computing $\mathsf{SHAP}$-scores over such class. In contrast to the model counting problem for DNF formulas, which admits an FPRAS, we prove that no such FPRAS exists for the computation of $\mathsf{SHAP}$-scores. Surprisingly, this negative result holds even for the class of monotone formulas in DNF. These techniques can be further extended to prove another strong negative result: Under widely believed complexity assumptions, there is no polynomial-time algorithm that checks, given a monotone DNF formula $\varphi$ and features $x,y$, whether the $\mathsf{SHAP}$-score of $x$ in $\varphi$ is smaller than the $\mathsf{SHAP}$-score of $y$ in $\varphi$.
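For contrast with the polynomial-time circuit result, the defining (factorial-time) computation of the SHAP-score under a uniform product distribution can be sketched directly (function and variable names are mine): average, over all feature orderings, the change in the model's expectation when the entity's value for the feature is revealed.

```python
from itertools import permutations, product
from math import factorial

def expectation(f, n, fixed):
    """E[f] under uniform Boolean inputs, given a partial
    assignment `fixed` mapping feature index -> bit."""
    free = [i for i in range(n) if i not in fixed]
    total = 0
    for bits in product([0, 1], repeat=len(free)):
        point = {**fixed, **dict(zip(free, bits))}
        total += f(tuple(point[i] for i in range(n)))
    return total / 2 ** len(free)

def shap_score(f, n, entity, i):
    """Exact SHAP-score of feature i on `entity` (a 0/1 tuple),
    averaging its marginal contribution over all orderings."""
    score = 0.0
    for order in permutations(range(n)):
        before = {j: entity[j] for j in order[:order.index(i)]}
        with_i = {**before, i: entity[i]}
        score += expectation(f, n, with_i) - expectation(f, n, before)
    return score / factorial(n)

f = lambda v: v[0] and v[1]     # a tiny decision-tree-like classifier
print(shap_score(f, 2, (1, 1), 0))
```

The scores satisfy the usual efficiency property (they sum to f(entity) minus E[f]); the paper's contribution is computing them in polynomial time when f is given as a deterministic and decomposable circuit, and showing hardness beyond that class.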